Riemannian Stochastic Variance-Reduced Cubic Regularized Newton Method for Submanifold Optimization

Authors

Abstract

We propose a stochastic variance-reduced cubic regularized Newton algorithm to optimize the finite-sum problem over a Riemannian submanifold of the Euclidean space. The proposed algorithm requires a full gradient and Hessian update at the beginning of each epoch, while it performs stochastic variance-reduced updates in the iterations within each epoch. The iteration complexity of $$O(\epsilon^{-3/2})$$ to obtain an $$(\epsilon,\sqrt{\epsilon})$$-second-order stationary point, i.e., a point with the Riemannian gradient norm upper bounded by $$\epsilon$$ and the minimum eigenvalue of the Riemannian Hessian lower bounded by $$-\sqrt{\epsilon}$$, is established when the manifold is embedded in the Euclidean space. Furthermore, the paper proposes a computationally more appealing modification that only requires an inexact solution of the cubic regularized Newton subproblem while attaining the same iteration complexity. The proposed algorithm is evaluated and compared with three other second-order methods in two numerical studies: estimating the inverse scale matrix of a multivariate t-distribution over the manifold of symmetric positive definite matrices, and estimating the parameter of a linear classifier over the sphere manifold.
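To make the update structure concrete, one iteration of such a scheme can be sketched as follows; the notation ($$v_t$$, $$H_t$$, $$\sigma$$, the retraction $$\mathcal{R}$$) is chosen for this sketch and is not taken verbatim from the paper. With $$v_t$$ and $$H_t$$ denoting variance-reduced estimators of the Riemannian gradient and Hessian at $$x_t$$ (built from the epoch-start full gradient and Hessian plus minibatch corrections), the next iterate is obtained by approximately solving a cubic-regularized model on the tangent space and retracting:

$$\eta_t \approx \operatorname*{arg\,min}_{\eta \in T_{x_t}\mathcal{M}} \; \langle v_t, \eta \rangle + \frac{1}{2}\langle H_t[\eta], \eta \rangle + \frac{\sigma}{3}\|\eta\|^{3}, \qquad x_{t+1} = \mathcal{R}_{x_t}(\eta_t).$$

The inexact variant mentioned in the abstract corresponds to solving this tangent-space subproblem only approximately.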

Similar articles

Stochastic Variance-Reduced Cubic Regularized Newton Method

We propose a stochastic variance-reduced cubic regularized Newton method for non-convex optimization. At the core of our algorithm is a novel semi-stochastic gradient along with a semi-stochastic Hessian, which are specifically designed for the cubic regularization method. We show that our algorithm is guaranteed to converge to an $$(\epsilon,\sqrt{\epsilon})$$-approximate local minimum within $$\tilde{O}(n^{4/5}/\epsilon^{3/2})$$ second-order oracle...
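As a rough sketch (with notation chosen here, not copied from the cited paper), semi-stochastic estimators of this kind reuse a full gradient $$\nabla f(\tilde{x})$$ and full Hessian $$\nabla^{2} f(\tilde{x})$$ computed at a snapshot point $$\tilde{x}$$ and correct them with minibatches $$B_g$$ and $$B_h$$:

$$v_t = \nabla f(\tilde{x}) + \frac{1}{|B_g|}\sum_{i \in B_g}\big(\nabla f_i(x_t) - \nabla f_i(\tilde{x})\big), \qquad H_t = \nabla^{2} f(\tilde{x}) + \frac{1}{|B_h|}\sum_{j \in B_h}\big(\nabla^{2} f_j(x_t) - \nabla^{2} f_j(\tilde{x})\big),$$

which are then plugged into the cubic-regularized model in place of the exact gradient and Hessian.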

A Variance Reduced Stochastic Newton Method

Quasi-Newton methods are widely used in practice for convex loss minimization problems. These methods exhibit good empirical performance on a wide variety of tasks and enjoy super-linear convergence to the optimal solution. For large-scale learning problems, stochastic Quasi-Newton methods have recently been proposed. However, these typically only achieve sub-linear convergence rates and have no...

Riemannian stochastic variance reduced gradient

Stochastic variance reduction algorithms have recently become popular for minimizing the average of a large but finite number of loss functions. In this paper, we propose a novel Riemannian extension of the Euclidean stochastic variance reduced gradient algorithm (R-SVRG) to a manifold search space. The key challenges of averaging, adding, and subtracting multiple gradients are addressed with r...
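A common way to write such a Riemannian variance-reduced gradient step (again with notation chosen for this sketch rather than taken from the paper) transports the snapshot gradients to the current point with a vector transport $$\mathcal{T}$$ before combining them, and then retracts:

$$\xi_t = \operatorname{grad} f_{i_t}(x_t) - \mathcal{T}_{\tilde{x} \to x_t}\big(\operatorname{grad} f_{i_t}(\tilde{x}) - \operatorname{grad} f(\tilde{x})\big), \qquad x_{t+1} = \mathcal{R}_{x_t}\!\big(-\alpha\, \xi_t\big).$$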

Stochastic Variance Reduced Riemannian Eigensolver

We study the stochastic Riemannian gradient algorithm for matrix eigendecomposition. The state-of-the-art stochastic Riemannian algorithm requires the learning rate to decay to zero and thus suffers from slow convergence and suboptimal solutions. In this paper, we address this issue by deploying the variance reduction (VR) technique of stochastic gradient descent (SGD). The technique was origin...
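For intuition, the leading-eigenvector instance of this problem can be posed as Riemannian optimization on the unit sphere; this standard formulation is given for illustration and may differ in detail from the one used in the cited paper:

$$\min_{x \in \mathbb{S}^{d-1}} \; f(x) = -\,x^{\top} A x, \quad A = \frac{1}{n}\sum_{i=1}^{n} z_i z_i^{\top}, \qquad \operatorname{grad} f(x) = -2\,(I - x x^{\top})\, A x,$$

so a stochastic Riemannian gradient step samples one $$z_i z_i^{\top}$$ in place of $$A$$.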

Regularized Newton method for unconstrained convex optimization

We introduce the regularized Newton method (rnm) for unconstrained convex optimization. For any convex function, with a bounded optimal set, the rnm generates a sequence that converges to the optimal set from any starting point. Moreover the rnm requires neither strong convexity nor smoothness properties in the entire space. If the function is strongly convex and smooth enough in the neighborho...
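As a generic illustration (the precise regularization rule in the cited paper may differ), a regularized Newton step replaces the Hessian by a positively shifted version so that the linear system is always well posed:

$$x_{k+1} = x_k - \big(\nabla^{2} f(x_k) + \mu_k I\big)^{-1} \nabla f(x_k), \qquad \mu_k > 0,$$

where $$\mu_k$$ may, for instance, be tied to the current gradient norm; this avoids requiring strong convexity of $$f$$ on the entire space.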


Journal

Journal title: Journal of Optimization Theory and Applications

Year: 2022

ISSN: 0022-3239, 1573-2878

DOI: https://doi.org/10.1007/s10957-022-02137-5